Multi-task learning
Multi-task learning (MTL) is an approach to machine learning in which a problem is learned together with other related problems at the same time, using a shared representation. This often yields a better model for the main task, because it allows the learner to exploit the commonality among the tasks.〔Baxter, J. (2000). A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149–198.〕〔Caruana, R. (1997). Multitask learning: A knowledge-based source of inductive bias. Machine Learning, 28:41–75.〕〔Thrun, S. (1996). Is learning the n-th thing any easier than learning the first? In Advances in Neural Information Processing Systems 8, pp. 640–646. MIT Press.〕 Multi-task learning is therefore a form of inductive transfer: it improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning the tasks in parallel with a shared representation, so that what is learned for each task can help the other tasks be learned better.〔http://www.cs.cornell.edu/~caruana/mlj97.pdf〕
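The shared-representation idea can be sketched with a toy example (not from the cited papers; the data and all variable names are made up for illustration): two regression tasks whose targets depend on the same low-dimensional projection of the input, fit jointly by gradient descent on one shared projection matrix plus one linear head per task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy regression tasks whose targets depend on the SAME 2-D latent
# projection of the 10-D input -- this is the commonality MTL exploits.
U_true = rng.normal(size=(10, 2))              # shared latent map
X1 = rng.normal(size=(100, 10))
X2 = rng.normal(size=(100, 10))
y1 = X1 @ U_true @ np.array([1.0, -1.0])       # task-1 head on the latent
y2 = X2 @ U_true @ np.array([0.5, 2.0])        # task-2 head on the latent

# Hard parameter sharing: one shared projection U, one head per task,
# trained jointly by gradient descent on the summed mean-squared errors.
U = 0.1 * rng.normal(size=(10, 2))
w1 = np.zeros(2)
w2 = np.zeros(2)
lr = 0.01
for _ in range(2000):
    r1 = X1 @ U @ w1 - y1                      # task-1 residuals
    r2 = X2 @ U @ w2 - y2                      # task-2 residuals
    gU = X1.T @ np.outer(r1, w1) / len(y1) + X2.T @ np.outer(r2, w2) / len(y2)
    gw1 = (X1 @ U).T @ r1 / len(y1)
    gw2 = (X2 @ U).T @ r2 / len(y2)
    U -= lr * gU
    w1 -= lr * gw1
    w2 -= lr * gw2

mse1 = np.mean((X1 @ U @ w1 - y1) ** 2)
mse2 = np.mean((X2 @ U @ w2 - y2) ** 2)
```

Because both heads read from the same learned projection `U`, the training signal from each task shapes the representation used by the other; trained on either task alone, `U` would be fit from half as much data.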
The goal of MTL is to improve the performance of learning algorithms by learning classifiers for multiple tasks jointly. This works particularly well when the tasks share some commonality and each is slightly undersampled. A spam filter is one example: every user has a slightly different distribution over spam and non-spam email (e.g., all email in Russian is spam for me, but not for my Russian colleagues), yet there is clearly a common aspect across users. Multi-task learning works because encouraging a classifier (or a modification of it) to also perform well on a slightly different task is a better regularizer than an uninformed one (e.g., simply enforcing that all weights be small).〔http://www.cse.wustl.edu/~kilian/research/multitasklearning/multitasklearning.html〕
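The spam-filter intuition can be made concrete with mean-regularized multi-task ridge regression: each user's weight vector is shrunk toward a shared vector rather than toward zero. This is a minimal sketch on synthetic data (the setup, penalty weight, and all names are illustrative assumptions, not a prescribed method from the article):

```python
import numpy as np

rng = np.random.default_rng(1)

# Three "users" whose true spam classifiers are small perturbations of a
# common weight vector -- the shared aspect across users in the example.
d, n = 5, 30
w_common = rng.normal(size=d)
tasks = []
for _ in range(3):
    w_true = w_common + 0.1 * rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    tasks.append((X, y))

# Mean-regularized multi-task ridge: each user's weights are pulled toward
# the shared vector w_bar instead of toward zero, via block coordinate descent.
lam = 5.0
w_bar = np.zeros(d)
for _ in range(20):
    ws = [
        # closed-form minimizer of ||X w - y||^2 + lam * ||w - w_bar||^2
        np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w_bar)
        for X, y in tasks
    ]
    w_bar = np.mean(ws, axis=0)
```

Replacing the uninformed penalty ‖w‖² with ‖w − w̄‖² is exactly the "better regularization" described above: each per-user classifier borrows statistical strength from the other users through the shared vector.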
==Techniques==


Source: Wikipedia, the free encyclopedia.